Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement, in stages, the functionality required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission. Sections that begin with 'Implementation' in the header indicate where you should begin your implementation. Note that some implementation sections are optional, and will be marked with 'Optional' in the header.

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.


Step 0: Load The Data

In [1]:
# Load pickled data
import pickle

# TODO: Fill this in based on where you saved the training and testing data

training_file = 'samples/train.p'
testing_file = 'samples/test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 2D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height), representing the original width and height of the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES

Complete the basic data summary below.

Based on the above understanding of the dataset, load sizes, coords, and the mapping from label id to its description below.
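The sign_id_to_name mapping is imported from the helper module signIDtoName; as a minimal sketch of how such a mapping could be built directly from signnames.csv (assuming the standard `ClassId,SignName` header), not the project's actual helper:

```python
import csv

def load_sign_names(path='signnames.csv'):
    # Build the id -> name mapping from signnames.csv
    # (a sketch of what the signIDtoName helper presumably provides).
    with open(path, newline='') as f:
        reader = csv.reader(f)
        next(reader)  # skip the "ClassId,SignName" header row
        return {int(row[0]): row[1] for row in reader}
```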

In [2]:
sizes_train, coords_train = train['sizes'], train['coords']
sizes_test, coords_test = test['sizes'], test['coords']
from signIDtoName import sign_id_to_name

The length of the first dimension of features is the sample count, while the second and third dimensions are the width and height of the image, respectively. The labels, sizes, and coords arrays have the same first dimension as the corresponding features, and are aligned with them sample by sample.

sign_id_to_name provides the mapping for all the signs, so its cardinality is the number of sign classes.
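These alignment assumptions can be checked cheaply; a small sanity-check sketch (check_parallel_arrays is a hypothetical helper, not part of the project code):

```python
import numpy as np

def check_parallel_arrays(features, labels):
    # The first axis counts samples; labels must align with features one-to-one.
    assert features.ndim == 4, 'expected (num_examples, width, height, channels)'
    assert len(features) == len(labels), 'features and labels must be parallel'
    return len(features), features.shape[1:3]
```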

In [3]:
import numpy as np

### Replace each question mark with the appropriate value.

# TODO: Number of training examples
n_train = np.shape(X_train)[0]

# TODO: Number of testing examples.
n_test = np.shape(X_test)[0]

# TODO: What's the shape of an traffic sign image?
image_shape = np.shape(X_train)[1:3]

# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(sign_id_to_name)

input_depth = 3

print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Number of training examples = 39209
Number of testing examples = 12630
Image data shape = (32, 32)
Number of classes = 43

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.

In [4]:
### Data exploration visualization goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
from displaySamples import display_samples
# Visualizations will be shown in the notebook.
%matplotlib inline
#%load_ext autoreload # enable auto reload of modules
#%autoreload 1 # only reload those imported by aimport

Find the index of the first sample of each traffic sign class.

In [5]:
def sign_indices(labels, sign_map):
    # Collect the index of the first sample of each sign class found in labels.
    indices = []
    for sign_id in sign_map.keys():
        for j in range(len(labels)):
            if labels[j] == sign_id:
                indices.append(j)
                break
    return indices

Labels = {}
Labels['train'] = y_train
Labels['test'] = y_test

indices_train, indices_test = [sign_indices(Labels[t], sign_id_to_name) for t in ['train', 'test']]
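An equivalent vectorized form, for the classes actually present in the labels, uses np.unique, whose return_index gives the first occurrence of each value:

```python
import numpy as np

# np.unique with return_index=True returns, for each class value present,
# the index of its first occurrence -- the same indices sign_indices collects
# (sign_indices additionally skips classes absent from labels).
labels = np.array([2, 0, 1, 0, 2, 1])
classes, first_indices = np.unique(labels, return_index=True)
```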

Show a sample of each kind of traffic sign in the training set, in order to understand the visual appearance of the samples:

In [6]:
display_samples(X_train[indices_train], y_train[indices_train], '', np.array([]), '', 
                sign_id_to_name, columns=5, indices=slice(None, None))

A sample of each kind of traffic sign in the test set:

In [7]:
display_samples(X_test[indices_test], y_test[indices_test], '', np.array([]), '', 
                sign_id_to_name, columns=5, indices=slice(None, None))

Study the distribution of traffic sign classes

In [8]:
class_sample_freq, classes = np.histogram(y_train, bins=np.arange(n_classes + 1 ))
mean_freq = np.mean(class_sample_freq)
classes = classes[:-1]
classes_sorted_by_sample_count = np.argsort(class_sample_freq)
class_names_sorted = [sign_id_to_name[i] for i in classes_sorted_by_sample_count]
class_sample_freq_sorted = class_sample_freq[classes_sorted_by_sample_count]
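With integer bin edges 0..n_classes, np.histogram counts how many samples fall on each class id. A tiny example of this counting semantics (np.bincount gives the same counts more directly):

```python
import numpy as np

y = np.array([0, 0, 1, 2, 2, 2])
counts, edges = np.histogram(y, bins=np.arange(4))  # bins [0,1), [1,2), [2,3]
# per-class counts: class 0 -> 2, class 1 -> 1, class 2 -> 3
```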
In [9]:
bar_width = 0.3
plt.figure(figsize=(15, n_classes*bar_width))

y_pos = np.arange(len(class_names_sorted))
plt.barh(y_pos, class_sample_freq_sorted,  align='center', alpha=0.4) #xerr=error,
plt.yticks(y_pos, class_names_sorted)
plt.xlabel('Sample Count')
plt.title('Sample Count Distribution')
plt.plot((mean_freq, mean_freq), (0, n_classes), 'y-')
plt.show()
In [10]:
classes_under_represented = classes_sorted_by_sample_count[class_sample_freq_sorted < mean_freq]
print(str.format('The number of under-represented signs: {}', np.size(classes_under_represented)))
print('The classes under-represented, from least to most represented (below mean sample count):')
[sign_id_to_name[i] for i in classes_under_represented]
The number of under-represented signs: 26
The classes under-represented, from least to most represented (below mean sample count):
Out[10]:
['Speed limit (20km/h)',
 'Go straight or left',
 'Dangerous curve to the left',
 'End of all speed and passing limits',
 'Pedestrians',
 'End of no passing',
 'End of no passing by vehicles over 3.5 metric tons',
 'Road narrows on the right',
 'Bicycles crossing',
 'Keep left',
 'Double curve',
 'Roundabout mandatory',
 'Dangerous curve to the right',
 'Go straight or right',
 'Bumpy road',
 'End of speed limit (80km/h)',
 'Vehicles over 3.5 metric tons prohibited',
 'Turn left ahead',
 'Beware of ice/snow',
 'Slippery road',
 'Children crossing',
 'Traffic signals',
 'No vehicles',
 'Turn right ahead',
 'Stop',
 'Wild animals crossing']

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture
  • Play around with preprocessing techniques (normalization, RGB to grayscale, etc.)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.

NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

Implementation

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.

In [11]:
### Preprocess the data here.
### Feel free to use as many code cells as needed.

Augment the traffic signs that are under-represented in the training samples

In training experiments, traffic signs with too few samples tend to be misclassified, such as "Speed limit (20km/h)". To improve the classification of such under-represented signs, the training set is augmented for those traffic signs whose sample counts fall below the mean sample count across all signs. The under-represented signs are augmented by simulating variations in image angle, lighting, and so on from the existing samples of the same traffic sign. The augmentation raises each such sign's sample count to the mean plus a certain margin; the margin is a hyperparameter that can be adjusted.

The following code implements the augmentation. The transform_image procedure is courtesy of: https://nbviewer.jupyter.org/github/vxy10/SCND_notebooks/blob/master/preprocessing_stuff/img_transform_NB.ipynb

  • The maximum rotation range is 20 degrees.
  • The affine-transformation shear range is 10.
  • The translation range is 5 pixels.
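transform_image combines such warps via OpenCV. As a minimal, self-contained sketch of just the translation component (note that np.roll wraps pixels around the border, which the real cv2-based transform does not):

```python
import numpy as np

def random_translate(img, max_shift=5, rng=None):
    # Shift the image by up to max_shift pixels in each direction.
    # Sketch only: np.roll wraps pixels around the border, whereas
    # transform_image uses proper cv2 warps.
    rng = rng if rng is not None else np.random.default_rng(0)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(img, shift=(int(dy), int(dx)), axis=(0, 1))
```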
In [12]:
class_sample_freq, _ = np.histogram(y_train, bins=np.arange(n_classes + 1 ))

from imageTransform import transform_image
import math
augmentation_count = np.zeros(n_classes, dtype=int) # per-class counter to control the number of augmentations
margin_from_mean = 1 # 0, -0.15, 1 double to the mean, to add more
diff_from_mean_with_margin = mean_freq*(1 + margin_from_mean) - class_sample_freq

def augument_sample(x, y):
    images_labels = [(x, y)]
    #sample_count = class_sample_freq[y]
    #diff_from_mean_with_margin = mean_freq*(1 + margin_from_mean) - sample_count
    if (0 < diff_from_mean_with_margin[y]) and (augmentation_count[y] <= diff_from_mean_with_margin[y] ):
         
        duplicatons_per_existing_sample = math.ceil(diff_from_mean_with_margin[y]/class_sample_freq[y])
        augmentation_count[y] = augmentation_count[y] + duplicatons_per_existing_sample
        
        # duplicate (x, y) by duplicatons_per_existing_sample times
        return images_labels  + [(transform_image(x, 20, 10, 5), y) for i in range(duplicatons_per_existing_sample)]
    else:
        return images_labels

Shuffle the training samples so that the seeds of the augmentation are random.

In [13]:
from sklearn.utils import shuffle

X_train, y_train = shuffle(X_train, y_train) # shuffle before augmentation.
In [14]:
# augument X_train based on y_train:
list_of_augumented = [augument_sample(x, y) for x, y in zip(X_train, y_train)]
X_train_aug = []
y_train_aug = []
for list_of_tuples in list_of_augumented:
    for x, y in list_of_tuples:
        X_train_aug = X_train_aug + [x]
        y_train_aug = y_train_aug + [y]
X_train_aug = np.array(X_train_aug)
y_train_aug = np.array(y_train_aug)
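The nested loop above flattens one level of nesting; the same can be written with itertools.chain in a single pass:

```python
from itertools import chain

# Flatten a list of lists of (image, label) tuples, then separate the columns.
nested = [[(1, 'a')], [(2, 'b'), (3, 'c')]]
flat = list(chain.from_iterable(nested))
images, labels = zip(*flat)
```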

Here are some samples after augmentation.

In [15]:
display_samples(X_train_aug, y_train_aug, '', np.array([]), '', 
                sign_id_to_name, columns=5, indices=slice(0, 50))

Here is the distribution after augmentation.

The under-represented classes have been raised to around the original mean.

In [16]:
class_sample_aug_freq, classes_aug = np.histogram(y_train_aug, bins=np.arange(n_classes + 1 ))
mean_aug_freq = np.mean(class_sample_aug_freq)
classes_aug = classes_aug[:-1]

class_sample_aug_freq_sorted = class_sample_aug_freq[classes_sorted_by_sample_count]

bar_width = 0.3
plt.figure(figsize=(15, n_classes*bar_width))

y_pos = np.arange(len(class_names_sorted))
plt.barh(y_pos, class_sample_aug_freq_sorted,  align='center', alpha=0.4) #xerr=error,
plt.yticks(y_pos, class_names_sorted)
plt.xlabel('Sample Count')
plt.title('Augmented Sample Count Distribution')
plt.plot((mean_aug_freq, mean_aug_freq), (0, n_classes), 'y-')
plt.plot((mean_freq, mean_freq), (0, n_classes), 'b-')
plt.show()
In [17]:
from sklearn.model_selection import train_test_split
X_train, X_validation, y_train, y_validation = train_test_split(X_train_aug, y_train_aug, test_size = 0.2, 
                                                                random_state = 0)

Note: X_train and y_train are updated with the augmented data.

Question 1

Describe how you preprocessed the data. Why did you choose that technique?

Answer:

Some images are visually very dark. A normalization procedure, which linearly scales each image to zero mean and unit variance, was performed to improve the classification of such images. This pre-processing is implemented inside the function LeNet below.

A 1x1 convolution with depth 3 is applied to the RGB input after normalization, implementing an adaptive color-space transformation conducive to classification, in lieu of a fixed color-space pre-processing step. This is an attempt to let machine learning achieve the color-space pre-processing adaptively, as it is hard for me to decide which color scheme would produce optimal classification. My experiments show it produces some improvement in classification accuracy.

Finally, the training samples are shuffled to ensure randomness in the order of training samples.
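For reference, tf.image.per_image_standardization can be mirrored in plain NumPy; a sketch (the divisor is clamped at 1/sqrt(num_pixels), as in the TensorFlow op, so constant images do not divide by zero):

```python
import numpy as np

def standardize(img):
    # Zero mean, unit variance per image, mirroring
    # tf.image.per_image_standardization's clamped divisor.
    img = img.astype(np.float64)
    adjusted_std = max(img.std(), 1.0 / np.sqrt(img.size))
    return (img - img.mean()) / adjusted_std
```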

Question 2

Describe how you set up the training, validation and testing data for your model. Optional: If you generated additional data, how did you generate the data? Why did you generate the data? What are the differences in the new dataset (with generated data) from the original dataset?

Answer:

The test data is left untouched until the very end, to produce the final test accuracy.

The training data is split 80% for training and 20% for validation during training.

To address the significant imbalance among the sample counts of the traffic signs, an augmentation procedure was performed (before the split) to add more samples for the under-represented signs, in order to improve classification accuracy on those signs. Experiments show it indeed improves the classification of some of them. For details, please read the section on sample augmentation.
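The 80/20 split performed by sklearn's train_test_split can be sketched in plain NumPy (split_train_val is a hypothetical helper, shown only to illustrate the mechanics):

```python
import numpy as np

def split_train_val(X, y, val_frac=0.2, seed=0):
    # Shuffle indices, then carve off the first val_frac for validation.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_val = int(len(X) * val_frac)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return X[train_idx], X[val_idx], y[train_idx], y[val_idx]
```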

In [18]:
### Define your architecture here.
### Feel free to use as many code cells as needed.

import tensorflow as tf
from tensorflow.contrib.layers import flatten

keep_prob = tf.placeholder(tf.float32) # probability to keep units
rate = 0.0003 # 0.007, 0.001, 0.07, 0.01, 0.0007, 0.0001
keep_prob_const = 0.5
weight_decay_rate = 0 # 0.1, 0.007
    
def LeNet(x):    
    # Hyperparameters
    mu = 0
    sigma = 0.1
    
    # pre-processing: zero mean, and unit norm.
    x_processed = tf.map_fn(lambda image: tf.image.per_image_standardization(image), x)
    
    # Layer 0: Color adaptation, convolutional. Input = 32x32x3, Output 32x32x3
    conv0_W = tf.Variable(tf.truncated_normal(shape=(1, 1, input_depth, input_depth), mean = mu, stddev = sigma))
    conv0_b = tf.Variable(tf.zeros(input_depth))
    conv0   = tf.nn.conv2d(x_processed, conv0_W, strides=[1, 1, 1, 1], padding='VALID') + conv0_b

    # Activation.
    conv0 = tf.nn.relu(conv0)

    # Layer 1: Convolutional. Input = 32x32x3. Output per kernel_size
    kernel1_size = 5 # tried 4, 5, 6, 7, 9; larger kernels encourage extraction of larger
    # features, and 3 was tried for smaller features, but 5 seemed best in experiments
    
    conv1_depth = 47 # original 6, guess with more complex feature representations
    
    conv1_W = tf.Variable(tf.truncated_normal(shape=(kernel1_size, kernel1_size, input_depth, conv1_depth), 
                                              mean = mu, stddev = sigma))
    conv1_b = tf.Variable(tf.zeros(conv1_depth))
    conv1   = tf.nn.conv2d(conv0, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b

    # Activation.
    conv1 = tf.nn.relu(conv1)

    # Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    # conv1 = tf.nn.avg_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    
    # SOLUTION: Layer 2: Convolutional. Output = per kernel2_size
    kernel2_size = 5 # 5, 9
    conv2_W = tf.Variable(tf.truncated_normal(shape=(kernel2_size, kernel2_size, conv1_depth, 16), 
                                              mean = mu, stddev = sigma))
    conv2_b = tf.Variable(tf.zeros(16))
    conv2   = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
    
    # Activation.
    conv2 = tf.nn.relu(conv2)

    # Pooling.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # Flatten.
    fc0 = flatten(conv2)
    fc0_dim = fc0.get_shape().as_list()[1]
    
    # Layer 3: Fully Connected.
    fc1_output_width = 150 # 120 
    fc1_W = tf.Variable(tf.truncated_normal(shape=(fc0_dim, fc1_output_width), mean = mu, stddev = sigma) , name = 'fc1_W')
    fc1_b = tf.Variable(tf.zeros(fc1_output_width))
    fc1   = tf.matmul(fc0, fc1_W) + fc1_b
    
    # Activation.
    fc1    = tf.nn.relu(fc1)
    
    # Drop-out.
    fc1 = tf.nn.dropout(fc1, keep_prob)

    # Layer 4: Fully Connected. Input = 120. Output = 84.
    fc2_W  = tf.Variable(tf.truncated_normal(shape=(fc1_output_width, 84), mean = mu, stddev = sigma), name = 'fc2_W')
    fc2_b  = tf.Variable(tf.zeros(84))
    fc2    = tf.matmul(fc1, fc2_W) + fc2_b
    
    # Activation.
    fc2    = tf.nn.relu(fc2)
    
    # Drop-out.
    # fc2 = tf.nn.dropout(fc2, keep_prob)

    # Layer 5: Fully Connected. Input = 84. Output = 10.
     
    fc3_W  = tf.Variable(tf.truncated_normal(shape=(84, n_classes), mean = mu, stddev = sigma), name = 'fc3_W')
    fc3_b  = tf.Variable(tf.zeros(n_classes))
    
    # for weights regularization. 
    
    if weight_decay_rate:
        tf.add_to_collection('losses', tf.mul(tf.nn.l2_loss(fc1_W), weight_decay_rate, name='weight_loss'))
        tf.add_to_collection('losses', tf.mul(tf.nn.l2_loss(fc2_W), weight_decay_rate, name='weight_loss'))
        tf.add_to_collection('losses', tf.mul(tf.nn.l2_loss(fc3_W), weight_decay_rate, name='weight_loss'))
        
    logits = tf.matmul(fc2, fc3_W) + fc3_b 
    
    return logits

Question 3

What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow from the classroom.

Answer:

The architecture is based on LeNet only with the following adaptations:

  • The input has R, G, B channels, so the input depth should be 3.

  • The final output should have n_classes of outputs, for classifications of traffic signs (n_classes is the number of traffic sign types).

  • There is a convolution layer 0, with a 1x1 kernel of depth 3. It is designed to learn a color-space transformation of the RGB pixels conducive to classification.

  • For the original convolution layer 1, larger kernel sizes (up to 9x9) were tried to encourage extraction of bigger features, with experiments eventually settling back on 5x5. The depth is increased from the original 6 to 47, hoping to capture more complex feature representations. LeNet seems optimized for digit recognition, where the digit dominates the image much more than a traffic sign does, so the added capacity may help the classifier focus on bigger features and be more robust to variations in traffic sign images. Experiments confirm the intuition.

  • The remaining layers have their dimensions adapted to account for the changes in convolution layer 1.

Drop-out was experimented with on fully connected layers 1 and 2, but I concluded that it does not help much with training.

There is optional L2 weight regularization on the fully connected layers (disabled in the final run, since weight_decay_rate is 0).

There are no other changes to the network architecture.
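The spatial sizes noted in the code comments follow from the 'VALID' padding formula; tracing them through the layers (assuming the 5x5 kernels in the final code):

```python
def out_size(size, kernel, stride=1):
    # Spatial output size under 'VALID' padding (no zero padding).
    return (size - kernel) // stride + 1

s = out_size(32, 1)     # 1x1 color-adaptation conv: 32
s = out_size(s, 5)      # conv1, 5x5 kernel: 28
s = out_size(s, 2, 2)   # 2x2 max pool: 14
s = out_size(s, 5)      # conv2, 5x5 kernel: 10
s = out_size(s, 2, 2)   # 2x2 max pool: 5, flattened as 5*5*16 = 400
```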

In [19]:
### Train your model here.
### Feel free to use as many code cells as needed.

import math
BATCH_SIZE = 5000 # 10000 ran out of GPU memory; changed to 5000
EPOCHS = 1200 # from 400, 300, 1000, 600, 500, 1200
print(str.format('The EPOCHS equivalent to the original with increased BATCH_SIZE: {}', 
                 10*(math.floor(BATCH_SIZE/128))))
The EPOCHS equivalent to the original with increased BATCH_SIZE: 390

As there is plenty of memory on the GPU and the main board, increasing BATCH_SIZE from 128 to 10000 is feasible, and may improve the accuracy of the gradient descent, possibly resulting in faster training. With an increased BATCH_SIZE, EPOCHS needs to be increased, as each batch yields only one gradient descent update.
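The trade-off can be made concrete: each epoch performs ceil(num_examples / BATCH_SIZE) gradient updates, so a larger batch means fewer updates per epoch and EPOCHS must grow roughly in proportion. With the training-set size from Step 1:

```python
import math

n_examples = 39209                             # training examples from Step 1
updates_small = math.ceil(n_examples / 128)    # updates per epoch at batch size 128
updates_large = math.ceil(n_examples / 5000)   # updates per epoch at batch size 5000
ratio = updates_small / updates_large          # how much EPOCHS must scale to compensate
```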

In [20]:
x = tf.placeholder(tf.float32, (None, 32, 32, input_depth))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, n_classes)
In [21]:
logits = LeNet(x)

cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, one_hot_y)
cross_entropy_sum = tf.reduce_sum(cross_entropy, name='cross_entropy_sum')
cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
tf.add_to_collection('losses', cross_entropy_mean)

loss_operation = tf.add_n(tf.get_collection('losses'), name='total_loss')

optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
In [22]:
#predict = tf.argmax(logits, 1)

correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()

def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / num_examples

def cross_entropy_mean(X_data, y_data):
    num_examples = len(X_data)
    total_cross_entropy_mean = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        sum_batch = sess.run(cross_entropy_sum, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
        total_cross_entropy_mean += sum_batch
    return total_cross_entropy_mean/num_examples
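Both helpers weight each per-batch statistic by the batch length before dividing by the total count; this recovers the exact dataset-wide mean even when the final batch is smaller than BATCH_SIZE:

```python
import numpy as np

vals = np.arange(10, dtype=float)
batches = [vals[0:4], vals[4:8], vals[8:10]]  # last batch has only 2 items
weighted_mean = sum(b.mean() * len(b) for b in batches) / len(vals)
# weighted_mean equals vals.mean() exactly, unlike a plain mean of batch means
```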

The following variable controls whether to re-train the classifier.

In [23]:
in_training = True
In [24]:
if in_training:
    epochs = []
    accuracies_training = []
    accuracies_validation = []
    cross_entropy_mean_training = []
    cross_entropy_mean_validation = []
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        num_examples = len(X_train)
    
        print("Training...")
        print()
        X_train, X_validation, y_train, y_validation = train_test_split(X_train_aug, y_train_aug, 
                                                                            test_size = 0.2 )
                                                                            #, random_state = 0
        for i in range(EPOCHS):
            X_train, y_train = shuffle(X_train, y_train)
            for offset in range(0, num_examples, BATCH_SIZE):
                end = offset + BATCH_SIZE
                batch_x, batch_y = X_train[offset:end], y_train[offset:end]
                sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: keep_prob_const}) 
        
            training_accuracy = evaluate(X_train, y_train)    
            validation_accuracy = evaluate(X_validation, y_validation)
        
            training_cross_entropy_mean = cross_entropy_mean(X_train, y_train)
            validation_cross_entropy_mean = cross_entropy_mean(X_validation, y_validation)
            
            epochs.append(i)
            accuracies_training.append(training_accuracy)
            accuracies_validation.append(validation_accuracy)
            cross_entropy_mean_training.append(training_cross_entropy_mean)
            cross_entropy_mean_validation.append(validation_cross_entropy_mean)
            
            print("EPOCH {} ...".format(i+1))
            print("Training Accuracy = {:.3f}".format(training_accuracy))
            print("Validation Accuracy = {:.3f}".format(validation_accuracy))
            print()
        
        saver.save(sess, 'lenet')
        print("Model saved")
Training...

EPOCH 1 ...
Training Accuracy = 0.062
Validation Accuracy = 0.061

EPOCH 2 ...
Training Accuracy = 0.086
Validation Accuracy = 0.081

EPOCH 3 ...
Training Accuracy = 0.126
Validation Accuracy = 0.118

EPOCH 4 ...
Training Accuracy = 0.175
Validation Accuracy = 0.164

EPOCH 5 ...
Training Accuracy = 0.234
Validation Accuracy = 0.225

EPOCH 6 ...
Training Accuracy = 0.284
Validation Accuracy = 0.276

EPOCH 7 ...
Training Accuracy = 0.338
Validation Accuracy = 0.329

EPOCH 8 ...
Training Accuracy = 0.378
Validation Accuracy = 0.370

EPOCH 9 ...
Training Accuracy = 0.414
Validation Accuracy = 0.407

EPOCH 10 ...
Training Accuracy = 0.445
Validation Accuracy = 0.436

EPOCH 11 ...
Training Accuracy = 0.475
Validation Accuracy = 0.466

EPOCH 12 ...
Training Accuracy = 0.501
Validation Accuracy = 0.490

EPOCH 13 ...
Training Accuracy = 0.520
Validation Accuracy = 0.508

EPOCH 14 ...
Training Accuracy = 0.539
Validation Accuracy = 0.526

EPOCH 15 ...
Training Accuracy = 0.561
Validation Accuracy = 0.547

EPOCH 16 ...
Training Accuracy = 0.576
Validation Accuracy = 0.564

EPOCH 17 ...
Training Accuracy = 0.589
Validation Accuracy = 0.574

EPOCH 18 ...
Training Accuracy = 0.600
Validation Accuracy = 0.589

EPOCH 19 ...
Training Accuracy = 0.618
Validation Accuracy = 0.605

EPOCH 20 ...
Training Accuracy = 0.625
Validation Accuracy = 0.614

EPOCH 21 ...
Training Accuracy = 0.640
Validation Accuracy = 0.629

EPOCH 22 ...
Training Accuracy = 0.650
Validation Accuracy = 0.641

EPOCH 23 ...
Training Accuracy = 0.662
Validation Accuracy = 0.650

EPOCH 24 ...
Training Accuracy = 0.669
Validation Accuracy = 0.659

EPOCH 25 ...
Training Accuracy = 0.681
Validation Accuracy = 0.669

EPOCH 26 ...
Training Accuracy = 0.687
Validation Accuracy = 0.673

EPOCH 27 ...
Training Accuracy = 0.698
Validation Accuracy = 0.685

EPOCH 28 ...
Training Accuracy = 0.709
Validation Accuracy = 0.695

EPOCH 29 ...
Training Accuracy = 0.715
Validation Accuracy = 0.702

EPOCH 30 ...
Training Accuracy = 0.721
Validation Accuracy = 0.709

EPOCH 31 ...
Training Accuracy = 0.729
Validation Accuracy = 0.717

EPOCH 32 ...
Training Accuracy = 0.736
Validation Accuracy = 0.722

EPOCH 33 ...
Training Accuracy = 0.741
Validation Accuracy = 0.730

EPOCH 34 ...
Training Accuracy = 0.746
Validation Accuracy = 0.734

EPOCH 35 ...
Training Accuracy = 0.753
Validation Accuracy = 0.742

EPOCH 36 ...
Training Accuracy = 0.761
Validation Accuracy = 0.748

EPOCH 37 ...
Training Accuracy = 0.766
Validation Accuracy = 0.752

EPOCH 38 ...
Training Accuracy = 0.771
Validation Accuracy = 0.758

EPOCH 39 ...
Training Accuracy = 0.776
Validation Accuracy = 0.763

EPOCH 40 ...
Training Accuracy = 0.781
Validation Accuracy = 0.767

EPOCH 41 ...
Training Accuracy = 0.785
Validation Accuracy = 0.771

EPOCH 42 ...
Training Accuracy = 0.788
Validation Accuracy = 0.774

EPOCH 43 ...
Training Accuracy = 0.794
Validation Accuracy = 0.780

EPOCH 44 ...
Training Accuracy = 0.798
Validation Accuracy = 0.782

EPOCH 45 ...
Training Accuracy = 0.801
Validation Accuracy = 0.785

EPOCH 46 ...
Training Accuracy = 0.805
Validation Accuracy = 0.790

EPOCH 47 ...
Training Accuracy = 0.809
Validation Accuracy = 0.793

EPOCH 48 ...
Training Accuracy = 0.812
Validation Accuracy = 0.795

EPOCH 49 ...
Training Accuracy = 0.814
Validation Accuracy = 0.797

EPOCH 50 ...
Training Accuracy = 0.819
Validation Accuracy = 0.802

EPOCH 51 ...
Training Accuracy = 0.822
Validation Accuracy = 0.807

EPOCH 52 ...
Training Accuracy = 0.825
Validation Accuracy = 0.807

EPOCH 53 ...
Training Accuracy = 0.828
Validation Accuracy = 0.809

EPOCH 54 ...
Training Accuracy = 0.830
Validation Accuracy = 0.813

EPOCH 55 ...
Training Accuracy = 0.834
Validation Accuracy = 0.815

EPOCH 56 ...
Training Accuracy = 0.835
Validation Accuracy = 0.817

EPOCH 57 ...
Training Accuracy = 0.838
Validation Accuracy = 0.819

EPOCH 58 ...
Training Accuracy = 0.842
Validation Accuracy = 0.822

EPOCH 59 ...
Training Accuracy = 0.842
Validation Accuracy = 0.822

EPOCH 60 ...
Training Accuracy = 0.844
Validation Accuracy = 0.825

EPOCH 61 ...
Training Accuracy = 0.847
Validation Accuracy = 0.828

EPOCH 62 ...
Training Accuracy = 0.850
Validation Accuracy = 0.829

EPOCH 63 ...
Training Accuracy = 0.853
Validation Accuracy = 0.833

EPOCH 64 ...
Training Accuracy = 0.853
Validation Accuracy = 0.833

EPOCH 65 ...
Training Accuracy = 0.856
Validation Accuracy = 0.837

EPOCH 66 ...
Training Accuracy = 0.858
Validation Accuracy = 0.837

EPOCH 67 ...
Training Accuracy = 0.860
Validation Accuracy = 0.840

EPOCH 70 ...
Training Accuracy = 0.866
Validation Accuracy = 0.845

...

EPOCH 100 ...
Training Accuracy = 0.907
Validation Accuracy = 0.883

...

EPOCH 150 ...
Training Accuracy = 0.940
Validation Accuracy = 0.913

...

EPOCH 200 ...
Training Accuracy = 0.958
Validation Accuracy = 0.928

...

EPOCH 250 ...
Training Accuracy = 0.969
Validation Accuracy = 0.936

...

EPOCH 300 ...
Training Accuracy = 0.977
Validation Accuracy = 0.943

...

EPOCH 350 ...
Training Accuracy = 0.983
Validation Accuracy = 0.947

...

EPOCH 400 ...
Training Accuracy = 0.987
Validation Accuracy = 0.952

...

EPOCH 450 ...
Training Accuracy = 0.991
Validation Accuracy = 0.954

...

EPOCH 500 ...
Training Accuracy = 0.993
Validation Accuracy = 0.957

...

EPOCH 550 ...
Training Accuracy = 0.995
Validation Accuracy = 0.957

...

EPOCH 590 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 591 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 592 ...
Training Accuracy = 0.995
Validation Accuracy = 0.958

EPOCH 593 ...
Training Accuracy = 0.996
Validation Accuracy = 0.958

EPOCH 594 ...
Training Accuracy = 0.996
Validation Accuracy = 0.960

EPOCH 595 ...
Training Accuracy = 0.996
Validation Accuracy = 0.960

EPOCH 596 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 597 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 598 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 599 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 600 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 601 ...
Training Accuracy = 0.995
Validation Accuracy = 0.958

EPOCH 602 ...
Training Accuracy = 0.996
Validation Accuracy = 0.958

EPOCH 603 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 604 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 605 ...
Training Accuracy = 0.996
Validation Accuracy = 0.960

EPOCH 606 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 607 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 608 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 609 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 610 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 611 ...
Training Accuracy = 0.996
Validation Accuracy = 0.960

EPOCH 612 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 613 ...
Training Accuracy = 0.996
Validation Accuracy = 0.960

EPOCH 614 ...
Training Accuracy = 0.996
Validation Accuracy = 0.960

EPOCH 615 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 616 ...
Training Accuracy = 0.996
Validation Accuracy = 0.960

EPOCH 617 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 618 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 619 ...
Training Accuracy = 0.996
Validation Accuracy = 0.960

EPOCH 620 ...
Training Accuracy = 0.996
Validation Accuracy = 0.960

EPOCH 621 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 622 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 623 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 624 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 625 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 626 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 627 ...
Training Accuracy = 0.996
Validation Accuracy = 0.960

EPOCH 628 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 629 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959

EPOCH 630 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 631 ...
Training Accuracy = 0.997
Validation Accuracy = 0.959

EPOCH 632 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 633 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 634 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 635 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 636 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 637 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 638 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 639 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 640 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 641 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 642 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 643 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 644 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 645 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 646 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 647 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 648 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 649 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 650 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 651 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 652 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 653 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 654 ...
Training Accuracy = 0.997
Validation Accuracy = 0.962

EPOCH 655 ...
Training Accuracy = 0.997
Validation Accuracy = 0.962

EPOCH 656 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 657 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 658 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 659 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 660 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 661 ...
Training Accuracy = 0.997
Validation Accuracy = 0.962

EPOCH 662 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 663 ...
Training Accuracy = 0.997
Validation Accuracy = 0.962

EPOCH 664 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 665 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 666 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 667 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 668 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 669 ...
Training Accuracy = 0.997
Validation Accuracy = 0.962

EPOCH 670 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 671 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 672 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 673 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 674 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 675 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 676 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 677 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 678 ...
Training Accuracy = 0.997
Validation Accuracy = 0.962

EPOCH 679 ...
Training Accuracy = 0.997
Validation Accuracy = 0.962

EPOCH 680 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 681 ...
Training Accuracy = 0.997
Validation Accuracy = 0.962

EPOCH 682 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 683 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 684 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 685 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 686 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 687 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 688 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 689 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 690 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 691 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 692 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 693 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 694 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960

EPOCH 695 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 696 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 697 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 698 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 699 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 700 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 701 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961

EPOCH 702 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 703 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 704 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 705 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 706 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 707 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 708 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 709 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 710 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 711 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 712 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 713 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 714 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 715 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 716 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 717 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 718 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 719 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 720 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 721 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 722 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 723 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 724 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 725 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 726 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 727 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961

EPOCH 728 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 729 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 730 ...
Training Accuracy = 0.998
Validation Accuracy = 0.964

EPOCH 731 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 732 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 733 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 734 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 735 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 736 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 737 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 738 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 739 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 740 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 741 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 742 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 743 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 744 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 745 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 746 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 747 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 748 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 749 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 750 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 751 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 752 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 753 ...
Training Accuracy = 0.998
Validation Accuracy = 0.964

EPOCH 754 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 755 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 756 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 757 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 758 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 759 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 760 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 761 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 762 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 763 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 764 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 765 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 766 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 767 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 768 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 769 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 770 ...
Training Accuracy = 0.998
Validation Accuracy = 0.964

EPOCH 771 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 772 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 773 ...
Training Accuracy = 0.998
Validation Accuracy = 0.962

EPOCH 774 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 775 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 776 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 777 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 778 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 779 ...
Training Accuracy = 0.999
Validation Accuracy = 0.962

EPOCH 780 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 781 ...
Training Accuracy = 0.999
Validation Accuracy = 0.962

EPOCH 782 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 783 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 784 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 785 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 786 ...
Training Accuracy = 0.998
Validation Accuracy = 0.964

EPOCH 787 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 788 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 789 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 790 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 791 ...
Training Accuracy = 0.999
Validation Accuracy = 0.962

EPOCH 792 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 793 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 794 ...
Training Accuracy = 0.998
Validation Accuracy = 0.963

EPOCH 795 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 796 ...
Training Accuracy = 0.998
Validation Accuracy = 0.964

EPOCH 797 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 798 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 799 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 800 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 801 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 802 ...
Training Accuracy = 0.998
Validation Accuracy = 0.964

EPOCH 803 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 804 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 805 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 806 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 807 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 808 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 809 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 810 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 811 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 812 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 813 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 814 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 815 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 816 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 817 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 818 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 819 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 820 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 821 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 822 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 823 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 824 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 825 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 826 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 827 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 828 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 829 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 830 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 831 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 832 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 833 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 834 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 835 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 836 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 837 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 838 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 839 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 840 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 841 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 842 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 843 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 844 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 845 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 846 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 847 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 848 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 849 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 850 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 851 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 852 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 853 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 854 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 855 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 856 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 857 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 858 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 859 ...
Training Accuracy = 0.999
Validation Accuracy = 0.962

EPOCH 860 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 861 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 862 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 863 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 864 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 865 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 866 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 867 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 868 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 869 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 870 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 871 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 872 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 873 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 874 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 875 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 876 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 877 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 878 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 879 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 880 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 881 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 882 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 883 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 884 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 885 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 886 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 887 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 888 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 889 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 890 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 891 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 892 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 893 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 894 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 895 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 896 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 897 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 898 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 899 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 900 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 901 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 902 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 903 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 904 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 905 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 906 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 907 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 908 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 909 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 910 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 911 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 912 ...
Training Accuracy = 0.999
Validation Accuracy = 0.966

EPOCH 913 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 914 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 915 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 916 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 917 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 918 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 919 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 920 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 921 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 922 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 923 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 924 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 925 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 926 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 927 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 928 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 929 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 930 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 931 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 932 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 933 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 934 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 935 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 936 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 937 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 938 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 939 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 940 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 941 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 942 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 943 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 944 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 945 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 946 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 947 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 948 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 949 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 950 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 951 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 952 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 953 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 954 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 955 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 956 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 957 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 958 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 959 ...
Training Accuracy = 1.000
Validation Accuracy = 0.965

EPOCH 960 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 961 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 962 ...
Training Accuracy = 0.999
Validation Accuracy = 0.966

EPOCH 963 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 964 ...
Training Accuracy = 0.999
Validation Accuracy = 0.963

EPOCH 965 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 966 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 967 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 968 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 969 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 970 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 971 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 972 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 973 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 974 ...
Training Accuracy = 0.999
Validation Accuracy = 0.966

EPOCH 975 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 976 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 977 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 978 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 979 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 980 ...
Training Accuracy = 0.999
Validation Accuracy = 0.966

EPOCH 981 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 982 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 983 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 984 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 985 ...
Training Accuracy = 0.999
Validation Accuracy = 0.966

EPOCH 986 ...
Training Accuracy = 0.999
Validation Accuracy = 0.966

EPOCH 987 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 988 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 989 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 990 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 991 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 992 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 993 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 994 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 995 ...
Training Accuracy = 0.999
Validation Accuracy = 0.964

EPOCH 996 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

EPOCH 997 ...
Training Accuracy = 0.999
Validation Accuracy = 0.965

[... per-epoch output for EPOCHs 998-1199 elided: training accuracy holds at 0.999-1.000 while validation accuracy fluctuates between 0.963 and 0.967 ...]

EPOCH 1200 ...
Training Accuracy = 1.000
Validation Accuracy = 0.965

Model saved
In [25]:
if in_training:
    plt.figure(1)
    plt.plot(epochs, accuracies_training,'b.', label="Training")
    plt.plot(epochs, accuracies_validation, 'r-', label="Validation")
    plt.xlabel("Epochs")
    plt.ylabel("Accuracy")
    plt.title("Learning Curve")
    plt.legend(loc='best') # place legend to avoid overlapping with curves.
    plt.figure(2)
    plt.plot(epochs, cross_entropy_mean_training, 'g.', label="Training")
    plt.plot(epochs, cross_entropy_mean_validation, 'r-', label="Validation")
    plt.xlabel("Epochs")
    plt.ylabel("Cross_entropy_mean")
    plt.title("Learning Curve")
    plt.legend(loc='best') # place legend to avoid overlapping with curves.
In [26]:
if in_training:
    highest_validation_accuracy_idx = np.argmax(accuracies_validation)
    print(str.format('The highest validation accuracy: {:.3f} reached at EPOCHS: {}', 
                     accuracies_validation[highest_validation_accuracy_idx], highest_validation_accuracy_idx))
The highest validation accuracy: 0.967 reached at EPOCHS: 1197
In [27]:
def predictions(xx, yy, k = 3):
    sess = tf.get_default_session()
    tops = tf.nn.top_k(tf.nn.softmax(logits), k) 
    soft_max_tops, top_indices = sess.run(tops, feed_dict = {x: xx, y: yy, keep_prob: 1.0})
    
    rank_of_target_class = []
    classified_class = []
    for i in range(len(yy)):
        row = top_indices[i].tolist()
        # Rank of the target class among the top-k guesses (k if absent),
        # and the top-1 prediction for this sample.
        rank_of_target_class.append(row.index(yy[i]) if yy[i] in row else k)
        classified_class.append(row[0])
    return [rank_of_target_class, classified_class, soft_max_tops, top_indices]
In [28]:
top_pick = 5
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    rank_of_target_class, classified_class, soft_max, top_indices = predictions(X_validation, y_validation, 
                                                                                k = top_pick)

import showClassifications
from showClassifications import show_classifications
from imp import reload
reload(showClassifications)
rank_counts = show_classifications(rank_of_target_class, classified_class, X_validation, y_validation, 
                                k = top_pick, limit = 10)
Correct recognition: 96.54%, partial samples:
The target class as second guesses: 2.09%, partial samples:
The target class as third guesses: 0.62%, partial samples:
The target class as 4-th guesses: 0.27%, partial samples:
The target class as 5-th guesses: 0.11%, partial samples:
The target class as beyond 5-th guesses: 0.37%, partial samples:
In [29]:
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    test_accuracy = evaluate(X_test, y_test)
    print("Test Accuracy = {:.3f}".format(test_accuracy))
Test Accuracy = 0.939

Question 4

How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)

Answer:

I used the optimizer recommended in the LeNet implementation from the Udacity SDC course.

After experimenting, I settled on a rather large batch size of 10,000: my computer can afford the memory, and a large batch gives a more accurate gradient estimate, so the network can learn more accurately and quickly.

With the increased batch size, the number of EPOCHS has to be increased as well, because each epoch performs only one backpropagation (and thus one weight update) per batch. Experiments put a suitable number of EPOCHS at around 500.

Also, with a large batch size, the magnitude of each weight update becomes very large. Experiments show that the learning rate has to be rather small, around 0.001, as does the coefficient for weight regularization, 0.007. With a learning rate of 0.01 or larger, the training and validation accuracies fluctuate severely.

Although I eventually gave up the dropout mechanism and relied on weight regularization alone to address the slight overfitting, I observed that with dropout the learning rate should be further reduced from 0.001 to 0.0007, to compensate for the additional weight changes applied to the weights that are not dropped.
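
The batch-size/epoch trade-off described above can be checked with simple arithmetic: one weight update happens per batch, so enlarging the batch shrinks the number of updates per epoch by the same factor. A small sketch (the training-set size of 35,000 is a made-up value for illustration; the real count comes from `X_train`):

```python
import math

def updates_per_epoch(num_samples, batch_size):
    """One weight update is performed per batch, so an epoch
    contains ceil(num_samples / batch_size) updates."""
    return math.ceil(num_samples / batch_size)

n_train = 35000  # hypothetical training-set size for illustration

small_batch = updates_per_epoch(n_train, 128)    # 274 updates/epoch
large_batch = updates_per_epoch(n_train, 10000)  # 4 updates/epoch

# To keep the total number of updates roughly constant, the epoch
# count must grow by about the same factor the batch size grew.
scale = small_batch / large_batch
print(small_batch, large_batch, round(10 * scale))
```

With these assumed numbers, the original 10 epochs would have to grow to several hundred to deliver a comparable number of weight updates, which matches the several-hundred-epoch range reported above.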

Question 5

What approach did you take in coming up with a solution to this problem? It may have been a process of trial and error, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think this is suitable for the current problem.

Answer:

I chose LeNet as the basis of my solution, on the recommendation of David, the instructor: he demonstrated that LeNet with bare-minimum adaptation can reach a training accuracy of 96%, a very impressive performance to start with. Besides, I had read LeCun's paper, which made me further confident in the architecture as a starting point to explore.

Based on LeNet, with minimal adaptation, I was able to reproduce the initial performance. I then worked on improving the validation performance further.

By plotting the learning curves of training and validation accuracy, I could observe how training progressed. Given the already remarkable performance, I mainly did fine-tuning focused on improving the validation accuracy.

I noticed that the training and validation accuracies progressed almost in lockstep, so overfitting did not seem to be a problem; any further improvement would have to come from fitting the model better to the training data.

I started by tuning the batch size. I reasoned that with a bigger batch, the gradient-descent updates approximate the theoretical (full-batch) gradient more closely. As my computer has plenty of memory, and even more on its GPU, I experimented with increasing the batch size and found it still comfortable at 10,000.

However, with the increased BATCH_SIZE, the performance actually dropped severely, to around 25%, with the original EPOCHS of 10. Upon Googling, I realized that with the significantly larger batch size, the number of weight updates per epoch drops sharply, since the weights are only updated once per batch. So I increased EPOCHS to around 500 (the theoretical equivalent would be 700), after which the performance recovered and even improved, to around 98% validation accuracy.

After increasing the BATCH_SIZE, with the original learning rate, I also observed some oscillation in the accuracies. I figured that with the much larger BATCH_SIZE, each weight update can accumulate to a very significant magnitude, so the learning rate has to be much reduced. Experiments showed that the learning rate has to be as small as 0.0001. With such a small learning rate, training is slow: it usually takes about 3 hours to reach a validation accuracy of 98.5%.

To further improve the validation accuracy, I observed that some training samples are very dark and some have washed-out colors. I figured that some kind of color-space transformation might help the classification, but I was not sure which scheme would work best. So I decided to use a 1x1 convolution filter with depth 3 at the input layer, which makes it possible to train an adaptive color-space transformation. The experiment turned out to be very successful: the performance improved by about 0.1%.
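
To make the 1x1-convolution idea concrete: with a 1x1 kernel of depth 3, every pixel's RGB vector is multiplied by the same 3x3 matrix (plus a bias), i.e. the layer learns a global linear color-space transform. A minimal numpy sketch (the weight values here are an arbitrary illustration, not the trained ones):

```python
import numpy as np

def conv_1x1(images, W, b):
    """Apply a 1x1 convolution: every pixel's channel vector is
    mapped by the same linear transform W (C_in x C_out) plus b."""
    # images: (N, H, W, C_in) -> (N, H, W, C_out)
    return images @ W + b

rng = np.random.default_rng(0)
batch = rng.random((2, 32, 32, 3)).astype(np.float32)

# Illustrative weights only: one roughly grayscale-like channel
# plus two passthrough channels.
W = np.array([[0.299, 1.0, 0.0],
              [0.587, 0.0, 1.0],
              [0.114, 0.0, 0.0]], dtype=np.float32)
b = np.zeros(3, dtype=np.float32)

out = conv_1x1(batch, W, b)
print(out.shape)  # (2, 32, 32, 3)
```

In training, W and b would be TensorFlow variables learned by backpropagation, so the network can pick whatever color transform best serves the later layers.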

I then found a gap of about 0.2% between the training accuracy and the validation accuracy. I guessed that it might be due to some overfitting, so I decided to try dropout. Experiments showed that a good arrangement is to apply dropout only at the last fully connected layer, with a keep probability of 50%. The validation accuracy eventually reached 99%, with EPOCHS at 500.
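
The dropout arrangement described above can be sketched as "inverted" dropout on the last fully connected layer's activations (keep probability 0.5, matching the text; the activation values below are made up):

```python
import numpy as np

def dropout(activations, keep_prob, rng):
    """Inverted dropout: zero each unit with probability (1 - keep_prob)
    and scale survivors by 1/keep_prob so the expected activation is
    unchanged. At inference time the layer is the identity."""
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

rng = np.random.default_rng(42)
fc1 = rng.random((4, 150))        # hypothetical fc1 activations
dropped = dropout(fc1, 0.5, rng)

# Roughly half of the units are zeroed; the survivors are doubled.
print((dropped == 0).mean())
```

In the TensorFlow 1.x notebook this corresponds to feeding `keep_prob: 0.5` during training and `keep_prob: 1.0` during evaluation, as the `evaluate`/`predictions` calls above do.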

To improve further, I increased the depth of convolution layer 1 from 6 to 47. This helped improve the validation accuracy by about another 0.1%.

However, I noticed that the model still performed very poorly on the new images I collected. It even failed on some very clear images of stop signs and a 20 km/h speed-limit sign that are easily identifiable to human eyes. What amazed me is that the classifier was able to classify some other samples that are quite challenging even to my own eyes.

So I decided to try standardization as a preprocessing step. It did not help much, but it did not make the performance worse either.
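
A minimal sketch of the standardization preprocessing mentioned here, in the common per-image form (zero mean, unit standard deviation across all pixels and channels of each image; the exact scheme used in the notebook is not shown, so this is an assumption):

```python
import numpy as np

def standardize(images, eps=1e-7):
    """Standardize each image independently to zero mean and unit
    standard deviation over all of its pixels and channels."""
    means = images.mean(axis=(1, 2, 3), keepdims=True)
    stds = images.std(axis=(1, 2, 3), keepdims=True)
    return (images - means) / (stds + eps)

rng = np.random.default_rng(1)
X = rng.integers(0, 256, size=(5, 32, 32, 3)).astype(np.float64)
Xs = standardize(X)
print(np.allclose(Xs.mean(axis=(1, 2, 3)), 0))  # True
```

The small `eps` guards against division by zero for a constant-color image; the same transform must be applied to training, validation, and any newly captured images.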

Finally, I analyzed the distribution of the training samples and was alarmed to observe that the class counts are highly imbalanced. In particular, there is a strong correlation between the under-represented signs and those that were poorly classified.

For those signs whose sample counts are below the mean (with some margin), I added more samples by transforming existing samples of the same sign with random rotations, translations, and affine transformations.

After the sample augmentation, I found it much harder for training to reach the previous level of accuracy. Since the validation samples are not augmented, the validation accuracy could even be higher than the training accuracy. This may indicate that I need to adjust the hyperparameters, and perhaps even add more parameters.

Even though the training/validation accuracies suffered, the classifications on the new samples look much more reasonable: some of the obvious signs are now correctly recognized.

Upon more experiments, I found that this amount of augmentation might not be enough, so I increased it: classes with fewer samples than twice the mean count are augmented up to twice the mean.
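
The augmentation budget described above (raise every class with fewer than twice the mean count up to twice the mean) can be sketched as follows; the class counts are made-up values for illustration, not the real GTSRB distribution:

```python
import numpy as np

def augmentation_targets(labels, factor=2.0):
    """For each class id, compute how many synthetic samples to
    generate so that every class reaches factor * (mean class count).
    Classes already above the target need no augmentation."""
    counts = np.bincount(labels)
    target = int(factor * counts.mean())
    return {cls: max(0, target - int(n)) for cls, n in enumerate(counts)}

# Hypothetical, highly imbalanced label array: class 0 has 180
# samples, class 1 has 20, class 2 has 100 (mean 100, target 200).
labels = np.array([0] * 180 + [1] * 20 + [2] * 100)
print(augmentation_targets(labels))  # {0: 20, 1: 180, 2: 100}
```

Each deficit would then be filled by applying the random rotation/translation/affine transforms to existing images of that class.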

Although I had implemented weight regularization, the overfitting problem does not seem severe, so I decided to remove the weight regularization for now.

I had to reduce BATCH_SIZE to 5000, as I ran into a GPU memory shortage; this seems to have no impact on the training performance.

Finally, with 1200 EPOCHS, the outcome is as follows:

  • Training accuracy: 100%
  • Validation accuracy: 96.5%
  • Test accuracy: 93.9%

There seems to be some overfitting. I might try reducing the size of hidden layer 1 (fc1), which I had increased from 120 to 150, but due to time limitations I will try that later.


Step 3: Test a Model on New Images

Take several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Implementation

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.

The following code prepares the data for the new samples collected from the Internet. The corresponding label data is also prepared (both the X and y data are prepared, as X_new and y_new).

In [30]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import os

import matplotlib.image as mpimg
name_to_sign_id = {
    '30-1': 1,
    '30': 1,
    'animal': 31,
    'curve-1': 19,
    'curve': 19,
    'keep-right': 38,
    'left': 34,
    'no-entry': 17,
    'stop-chinese': 14,
    'stop-distorted': 14,
    'stop': 14,
    'stop1': 14,
    'yield': 13        
}

files = os.listdir('./new-samples/')

lst = [np.array(mpimg.imread(os.path.join('./new-samples/', file))) for file in files]
assert(all(l.shape == (32, 32, 3) for l in lst))  
X_new = np.array(lst)

y_new = np.array([name_to_sign_id[os.path.splitext(file)[0]] for file in files])

Here are the new samples:

In [31]:
display_samples(X_new, y_new, 'class', np.array([]), '', sign_id_to_name, columns=5)
In [32]:
### Run the predictions here.
### Feel free to use as many code cells as needed.
top_picks = 5
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    guesses_n, top_picks_n, soft_max_tops, top_indices = predictions(X_new, y_new, k = top_picks)   
In [33]:
import importlib
importlib.reload(showClassifications)
stats_n = show_classifications(guesses_n, top_picks_n, X_new, y_new, k = top_picks, limit = 10)
stats_n
Correct recognition: 58.33%, partial samples:
The target class as second guesses: 8.33%, partial samples:
The target class as beyond 5-th guesses: 33.33%, partial samples:
Out[33]:
array([7, 1, 0, 0, 0, 4])

Question 6

Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook.

Answer:

I'm surprised that the classifier still does poorly with slight variations in the images. It seems that the size of the signs within the image and the resolution of the images matter a lot. The images I collected tend to show larger signs than those in the original collection, and some of the images have poor resolution.

Question 7

Is your model able to perform equally well on captured pictures when compared to testing on the dataset? The simplest way to do this is to check the accuracy of the predictions. For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate.

NOTE: You could check the accuracy manually by using signnames.csv (same directory). This file has a mapping from the class id (0-42) to the corresponding sign name. So, you could take the class id the model outputs, lookup the name in signnames.csv and see if it matches the sign from the image.

Answer:

My model does rather poorly on the newly captured pictures: the accuracy is only about 50%, while on the test images from the original collection the accuracy is as high as 94.5%.

In [34]:
### Visualize the softmax probabilities here.
### Feel free to use as many code cells as needed.
In [75]:
ls_maps = []
for i in range(len(y_new)):
    map_sign_id_to_confidence = {}
    for j in range(top_indices.shape[1]):
        map_sign_id_to_confidence[top_indices[i, j]] = soft_max_tops[i, j]
    ls_maps.append(map_sign_id_to_confidence)
keys_ = []
for m in ls_maps:
    keys_ = keys_ + list(m.keys())

keys_ = sorted(list(set(keys_)))

confidence_matrix = []

for i in range(len(y_new)):
    row = np.array([ls_maps[i].get(key, 0) for key in keys_])
    confidence_matrix.append(row)
    
confidence_matrix = np.array(confidence_matrix)

# The column labels are the signs predicted.
column_labels = [sign_id_to_name[id] for id in keys_]

# The row labels are the new samples collected
row_labels = [sign_id_to_name[id] for id in y_new]

plt.matshow(confidence_matrix, cmap='gist_heat')
plt.xticks(range(confidence_matrix.shape[1]), column_labels, rotation = 90)
plt.yticks(range(confidence_matrix.shape[0]), row_labels)
plt.gca().set_xticks([x - 0.5 for x in plt.gca().get_xticks()][1:], minor='true')
plt.gca().set_yticks([y - 0.5 for y in plt.gca().get_yticks()][1:], minor='true')
plt.grid(which='minor')
plt.colorbar()
plt.show()

Question 8

Use the model's softmax probabilities to visualize the certainty of its predictions, tf.nn.top_k could prove helpful here. Which predictions is the model certain of? Uncertain? If the model was incorrect in its initial prediction, does the correct prediction appear in the top k? (k should be 5 at most)

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.

Take this numpy array as an example:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.

Answer:

In the heatmap of softmax probabilities above, the rows correspond to the newly collected samples and the columns to the signs predicted by the classifier.

The classifier is quite certain about, and correctly classifies, the "Stop", "Yield", "No entry", and "Wild animals crossing" signs and one of the "Speed limit (30km/h)" signs. It amazes me that it can classify one of the "Stop" signs written in Chinese!

It quite confidently, but incorrectly, classifies the other "Speed limit (30km/h)" as "Speed limit (80km/h)"; given their similarity, this is understandable.

It also confidently misclassifies "Keep right", "Turn left ahead", and the two "Dangerous curve to the left" samples.

The problems with the two "Dangerous curve to the left" samples might be partially due to my own labeling, which seems questionable.

The classifier is also quite certain about one "Dangerous curve to the left" image, but the classification is incorrect: it assigns the highest probability to "Double curve", and the correct prediction appears only as the fifth guess, with a very weak probability. Again, my labeling of the image as "Dangerous curve to the left" might itself be incorrect.

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.